5 research outputs found

    On the Complementarity of Face Parts for Gender Recognition

    This paper evaluates the expected complementarity between the most prominent parts of the face for the gender recognition task. Given the image of a face, five important parts (right and left eyes, nose, mouth and chin) are extracted and represented as appearance-based data vectors. In addition, the full face and its internal rectangular region (excluding hair, ears and contour) are also coded. Several mixtures of classifiers based on (subsets of) these five single parts were designed using simple voting, weighted voting and a separate learner as combiner. Experiments using the FERET database prove that ensembles perform significantly better than plain classifiers based on single parts (as expected).
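The voting combiners mentioned above can be sketched as follows. This is an illustrative example, not the paper's code; the part names, predictions and weights are hypothetical.

```python
# Hypothetical sketch: combining per-part gender classifiers by voting.
from collections import Counter

def majority_vote(predictions):
    """Return the label predicted by most base classifiers (ties broken arbitrarily)."""
    return Counter(predictions).most_common(1)[0][0]

def weighted_vote(predictions, weights):
    """Weighted voting: each classifier's vote counts proportionally to its weight."""
    scores = {}
    for label, w in zip(predictions, weights):
        scores[label] = scores.get(label, 0.0) + w
    return max(scores, key=scores.get)

# One prediction per face part: right eye, left eye, nose, mouth, chin (illustrative)
part_preds = ["female", "female", "male", "female", "male"]

print(majority_vote(part_preds))                                # female (3 of 5 votes)
print(weighted_vote(part_preds, [0.9, 0.9, 0.5, 0.6, 0.4]))     # female (2.4 vs 0.9)
```

The "other learner as combiner" option in the paper would replace these fixed rules with a classifier trained on the base classifiers' outputs.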

    The Role of Face Parts in Gender Recognition

    This paper evaluates the discriminant capabilities of face parts in gender recognition. Given the image of a face, a number of subimages containing the eyes, nose, mouth, chin, right eye, internal face (eyes, nose, mouth, chin), external face (hair, ears, contour) and the full face are extracted and represented as appearance-based data vectors. Compared with previous related works, a greater number of face parts was considered, using two databases of face images instead of only one, along with several classification rules. Experiments proved that single face parts offer enough information to allow discrimination between genders, with recognition rates that can reach 86%, while classifiers based on the joint contribution of internal parts can achieve rates above 90%. The best result using the full face was similar to those reported in general papers on gender recognition (>95%). A high degree of correlation was found among classifiers as regards their capacity to measure the relevance of face parts, but results were strongly dependent on the composition of the database. Finally, an evaluation of the complementarity between discriminant information from pairs of face parts reveals a high potential to define effective combinations of classifiers.
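One simple way to quantify the complementarity between two part classifiers is an "oracle" upper bound: the fraction of samples on which at least one of the two is correct, i.e. the best accuracy any combiner of the pair could reach. This is a sketch of that idea under assumed data, not the paper's protocol.

```python
# Illustrative "oracle" upper bound on the accuracy of combining two classifiers.
def oracle_accuracy(preds_a, preds_b, labels):
    """Fraction of samples where at least one of the two classifiers is correct."""
    hits = sum(1 for a, b, y in zip(preds_a, preds_b, labels) if a == y or b == y)
    return hits / len(labels)

labels = ["m", "f", "m", "f", "m", "f"]
eyes   = ["m", "f", "m", "m", "m", "f"]   # 5/6 correct alone (hypothetical)
mouth  = ["m", "f", "f", "f", "f", "m"]   # 3/6 correct alone (hypothetical)

print(oracle_accuracy(eyes, mouth, labels))  # 1.0: the parts' errors do not overlap
```

A high oracle bound relative to each classifier's individual accuracy indicates complementary information, the precondition for effective ensembles.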

    Face gender classification: A statistical study when neutral and distorted faces are combined for training and testing purposes

    This paper presents a thorough study of gender classification methodologies applied to neutral, expressive and partially occluded faces, when these are used in all possible arrangements of training and testing roles. A comprehensive comparison of two representation approaches (global and local), three types of features (grey levels, PCA and LBP), three classifiers (1-NN, PCA + LDA and SVM) and two performance measures (CCR and d′) is provided over single- and cross-database experiments. Experiments revealed some interesting findings, which were supported by three non-parametric statistical tests: when training and test sets contain different types of faces, local models using the 1-NN rule outperform global approaches, even those using SVM classifiers; however, with the same type of faces, even if the acquisition conditions are diverse, the statistical tests could not reject the null hypothesis of equal performance of global SVMs and local 1-NNs.
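Among the features compared above, LBP is the most texture-oriented. The following is a minimal sketch of the basic 8-neighbour LBP code and its histogram feature vector; the paper's exact LBP variant, radius and parameters are assumptions here.

```python
# Minimal basic LBP sketch (3x3 neighbourhood); parameters are illustrative.
import numpy as np

def lbp_image(gray):
    """Compute the 3x3 LBP code for each interior pixel of a 2-D grayscale array."""
    g = np.asarray(gray, dtype=np.int32)
    c = g[1:-1, 1:-1]                      # centre pixels
    # Neighbours clockwise from top-left; each contributes one bit of the code.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(offsets):
        nb = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.int32) << bit
    return code

def lbp_histogram(gray):
    """256-bin normalised histogram of LBP codes: the feature vector for one region."""
    hist = np.bincount(lbp_image(gray).ravel(), minlength=256).astype(float)
    return hist / hist.sum()
```

In a local representation, such a histogram would be computed per face region and the region histograms concatenated into one descriptor.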

    Mirror mirror on the wall... an unobtrusive intelligent multisensory mirror for well-being status self-assessment and visualization

    A person’s well-being status is reflected by their face through a combination of facial expressions and physical signs. The SEMEOTICONS project translates the semeiotic code of the human face into measurements and computational descriptors that are automatically extracted from images, videos and 3D scans of the face. SEMEOTICONS developed a multisensory platform in the form of a smart mirror to identify signs related to cardio-metabolic risk. The aim was to enable users to self-monitor their well-being status over time and guide them to improve their lifestyle. Significant scientific and technological challenges were addressed to build the multisensory mirror, from touchless data acquisition to real-time processing and integration of multimodal data.

    Face gender classification under realistic conditions. Dealing with neutral, expressive and partially occluded faces

    This thesis focuses on gender classification from facial images, addressing the problem from a more realistic standpoint than the one traditionally adopted in the literature. In real-world settings, several problems can arise from the lack of control over the subjects and their environment. Moreover, individual characteristics such as age and ethnicity are likely to vary significantly. At the same time, subjects may show their emotions through facial expressions and wear accessories that cover parts of their face, which introduces distortions into the facial images. These are the main problems, together with other complications such as those caused by illumination changes and imprecise face detection, that we address in this work. We begin by studying the possibility of classifying gender from parts of the face, namely the eyes, the nose, the mouth and the chin. From the experimental results obtained on two databases of facial images, we conclude that the eyes are the facial region providing the most robust results and that different face parts contain complementary information about a person's gender. We then propose a new type of local feature and a neighbourhood-based classification method. The proposed features rely on local contrast values while preserving spatial information. The classification method consists of a combination of classifiers in which each base classifier specialises in a specific region of the face. Both proposals were compared against the most widely used techniques in this field through a thorough experimental analysis using images of neutral and expressive faces, as well as images of faces wearing sunglasses and scarves.
The empirical results indicate that all the solutions solve the task in a statistically equivalent way when the training and test images share the same characteristics. However, when the training and test sets contain images of different types, our proposals behave more robustly than the rest. Finally, we present a statistical study of the influence of image resolution on gender classification. The results showed that the optimal resolutions lie between 22x18 and 90x72 pixels; however, images of only 3x2 pixels already provide useful information to begin distinguishing between genders.
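The very low resolutions in the study above (down to 3x2 pixels) can be produced by block averaging. This is an assumed sketch of that preprocessing step, not the thesis code.

```python
# Assumed sketch: reduce a face image to a very low resolution by block averaging.
import numpy as np

def downsample(gray, out_h, out_w):
    """Block-average a 2-D grayscale array whose size is a multiple of (out_h, out_w)."""
    g = np.asarray(gray, dtype=float)
    h, w = g.shape
    assert h % out_h == 0 and w % out_w == 0, "output size must divide input size"
    return g.reshape(out_h, h // out_h, out_w, w // out_w).mean(axis=(1, 3))

face = np.arange(90 * 72, dtype=float).reshape(90, 72)  # stand-in for a 90x72 face
tiny = downsample(face, 3, 2)
print(tiny.shape)  # (3, 2)
```

Each output pixel is the mean grey level of one rectangular block, so the 3x2 image retains only the coarsest light/dark layout of the face, which the study found is already weakly informative about gender.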